20 - Komplexität von Algorithmen (Complexity of Algorithms) [ID:10713]

So, as a very short recap, what we are doing here is trying to answer the question of whether some problems are easier than others, or whether they are all equally hard. To keep things a little simpler, we are focusing on what are called decision problems, that is, yes/no questions. So the question is: are some decision problems simpler than others? Well, you can already answer that question to some extent. You know that some decision problems are decidable and some are not, so we have at least one classification of decision problems. Interestingly, even among the undecidable problems, you have probably seen that some are easier than others; some undecidable problems are simpler than other undecidable problems. How can that be? Well, there are undecidable problems which are recursively enumerable. Did you see that notion? The idea is that even though for such a problem we cannot decide whether the answer is yes or no, we can at least confirm it when the answer is yes. That is better than nothing. And there are problems for which we cannot even do that.

Now, more precisely, what we are looking at here is the question of whether, among the decidable problems, some are simpler than others. It seems like a natural question to ask in computer science, and as you know, the answer is not so clear. We will give a precise answer in the upcoming weeks. We will see that, as intuition suggests, some problems are indeed simpler than others, but it is not always easy to establish that. What we do towards answering this question is look at properties that a solution to a problem can have: the time that the best algorithm needs to solve the problem, or the space that it needs to use while solving it. And we also bring in determinism, asking: is it possible that some problems have a simpler solution if we allow non-determinism?

There are also other dimensions one can explore in order to capture differences among problems. One could ask, for instance: is this problem simple, or can I get a very good speed-up if I try to solve it using many computers in parallel? Some problems seem to have that property and others don't. But we will focus on these three: time, space and determinism. It is not clear that just because we can define one of these classes, it automatically characterizes a distinct level of complexity. We will see on Friday that there are complexity classes which could in principle have been distinct but instead are the same; we will see examples of complexity classes that collapse into one. That we will do on Friday, and next week, I think, you will see examples of complexity classes that are genuinely different, from which we can show that some problems really are harder than others.

Now, you are of course familiar with the more common complexity classes, namely P and NP; you have seen them before. One complexity class we defined in the last lecture was the one called L: the class of problems that can be solved by a deterministic Turing machine using space that is logarithmic in the size of the input. NL is defined in the same way, but for non-deterministic Turing machines.
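For reference, the classes just mentioned have standard formal definitions. The notation below is the usual textbook one (some books write DSPACE and DTIME instead of SPACE and TIME) and is not quoted from the lecture:

\[
\mathrm{L} = \mathrm{SPACE}(\log n), \qquad
\mathrm{NL} = \mathrm{NSPACE}(\log n), \qquad
\mathrm{P} = \bigcup_{k \ge 1} \mathrm{TIME}\bigl(n^{k}\bigr), \qquad
\mathrm{NP} = \bigcup_{k \ge 1} \mathrm{NTIME}\bigl(n^{k}\bigr).
\]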

When one first looks at this complexity class L, it seems a little arbitrary, and it is also hard to grasp the intuition behind it: what does it mean to use space logarithmic in the input on a Turing machine? We never program Turing machines, so it seems somewhat odd. But here is one intuition you can use, and it is a sound one. Suppose your input is a string or an array, whatever you like, of size N. If you have log N bits, these bits are enough to name, to identify, each position of the array: the positions range from 1 to N, and with log N bits we can write down any number up to N. So log N bits are essentially a pointer into the input; with log N bits we can describe a pointer into the input. The intuition, then, is that with space limited to C times log N, we only have C pointers into the input, a constant number of pointers, and that's it. So if you want to think of it in terms of a RAM machine, or Java, or C, these are problems which you can solve using a constant number of pointer variables and nothing else. You cannot use new or malloc; you cannot ask for more memory.
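To illustrate this "constant number of pointers" picture, here is a small sketch of my own, not code from the lecture: deciding whether the input is a palindrome needs nothing beyond two indices into the read-only input, so it fits comfortably within logarithmic space.

/**
 * Illustrative sketch (not from the lecture) of a log-space style computation.
 * The input is treated as read-only; the only working storage is two indices
 * into it, i.e. two "pointers" of O(log n) bits each.  No arrays, no new/malloc.
 */
public class LogSpacePalindrome {

    static boolean isPalindrome(String input) {
        int left = 0;                      // pointer #1 into the input
        int right = input.length() - 1;    // pointer #2 into the input
        while (left < right) {
            if (input.charAt(left) != input.charAt(right)) {
                return false;              // mismatch: not a palindrome
            }
            left++;
            right--;
        }
        return true;                       // all compared positions matched
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("racecar")); // true
        System.out.println(isPalindrome("turing"));  // false
    }
}

The two int variables left and right are exactly the pointers from the intuition above: each holds a position between 0 and N-1, so each needs only about log N bits, and no further memory is ever allocated.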

Part of a video series
Access: open access
Duration: 01:32:17 min
Recording date: 2013-06-26
Uploaded: 2019-04-22 17:29:03
Language: de-DE

  • Mathematical tools for algorithm analysis: estimating the asymptotic growth of functions, summations, counting, divide-and-conquer recurrences, etc.

  • Basic concepts of quantitative algorithm analysis: worst-case and average-case analysis, upper and lower bounds, algorithm complexity and problem complexity

  • Exemplary analyses of sorting algorithms

  • Sorting complexity and entropy

  • Source coding and data compression

  • Complexity of arithmetic operations and problems (multiplication, primality testing, factorization)

  • Modular arithmetic and the fast Fourier transform

  • Cryptography and complexity

Learning objectives and competencies:

Students

  • acquire solid knowledge of the basic concepts of quantitative algorithm analysis (running time) and of the mathematical methods required for it

  • understand the complexity (running-time behaviour) of standard algorithms (e.g. sorting, arithmetic algorithms) and can explain their practical significance

  • are able to carry out worst-case and average-case analyses of simple, exemplary algorithms

  • can relate algorithm complexity and problem complexity to each other using examples

  • can explain the relationship between sorting and searching complexity and the notion of entropy

  • acquire basic knowledge of the algebraic structures of arithmetic and of the complexity of arithmetic operations

  • can explain the role of complexity results in assessing the security of simple cryptographic protocols

Literature:

Graham, Knuth, Patashnik, Concrete Mathematics, Addison-Wesley, 1994.
Cormen, Leiserson, Rivest, Stein, Introduction to Algorithms, MIT Press, 2001.
Heun, Grundlegende Algorithmen, Vieweg, 2001.
